Explainability matters: The effect of liability rules on the healthcare sector
Wei, Jiawen, Verona, Elena, Bertolini, Andrea, Mengaldo, Gianmarco
Explainability, the capability of an artificial intelligence system (AIS) to explain its outcomes in a manner that is comprehensible to human beings at an acceptable level, has been deemed essential for critical sectors, such as healthcare. Is this really the case? In this perspective, we consider two extreme cases, "Oracle" (without explainability) versus "AI Colleague" (with explainability), for a thorough analysis. We discuss how the level of automation and explainability of AIS can affect the determination of liability between the medical practitioner/facility and the manufacturer of the AIS. We argue that explainability plays a crucial role in setting a responsibility framework in healthcare, from a legal standpoint, to shape the behavior of all involved parties and mitigate the risk of potential defensive medicine practices.
- Asia > Singapore (0.05)
- Europe > France (0.04)
- North America > United States (0.04)
- (5 more...)
- Law > Torts Law (0.96)
- Health & Medicine > Therapeutic Area (0.93)
Regulating the future: A look at the EU's plan to reboot product liability rules for AI
A recently presented European Union plan to update long-standing product liability rules for the digital age -- including addressing rising use of artificial intelligence (AI) and automation -- took some instant flak from the European consumer organization BEUC, which framed the update as something of a downgrade by arguing EU consumers will be left less well protected from harms caused by AI services than other types of products. For a flavor of the sorts of AI-driven harms and risks that may be fuelling demands for robust liability protections, only last month the UK's data protection watchdog issued a blanket warning over pseudoscientific AI systems that claim to perform 'emotional analysis' -- urging that such tech should not be used for anything other than pure entertainment. While on the public sector side, back in 2020, a Dutch court found an algorithmic welfare risk assessment for social security claimants breached human rights law. And, in recent years, the UN has also warned over the human rights risks of automating public service delivery. Additionally, US courts' use of black-box AI systems to make sentencing decisions -- opaquely baking in bias and discrimination -- has been a tech-enabled crime against humanity for years. BEUC, an umbrella consumer group which represents 46 independent consumer organisations from 32 countries, had been calling for years for an update to EU liability laws to take account of growing applications of AI and ensure consumer protection laws are not outpaced.
- Law > Torts Law (1.00)
- Government > Regional Government > Europe Government (1.00)
- Law > Litigation (0.88)
- Information Technology > Security & Privacy (0.88)
Regulating AI – is the current legislation capable of dealing with AI? -- FCAI
How does the law regulate Artificial Intelligence (AI)? How do we ensure AI applications comply with existing legal rules and principles? Is new regulation needed and, if so, what type of regulation? These questions have gained increasing importance as AI deployment has increased across various sectors of our societies. Adopting new technological solutions has raised legislators' concern for the protection of fundamental rights, both nationally in Finland and at the EU level. However, finding these answers is not easy. And the answers we find may be frustrating: varying from the typical "it depends" to the self-evident "it's complicated", followed by the slightly more optimistic "we don't know yet".
- Law > Statutes (1.00)
- Government (1.00)
Liability Design for Autonomous Vehicles and Human-Driven Vehicles: A Hierarchical Game-Theoretic Approach
Di, Xuan, Chen, Xu, Talley, Eric
Autonomous vehicles (AVs) are inevitably entering our lives with potential benefits for improved traffic safety, mobility, and accessibility. However, AVs' benefits also introduce a serious potential challenge, in the form of complex interactions with human-driven vehicles (HVs). The emergence of AVs introduces uncertainty in the behavior of human actors and in the impact of the AV manufacturer on autonomous driving design. This paper thus aims to investigate how AVs affect road safety and to design socially optimal liability rules for AVs and human drivers. A unified game is developed, including a Nash game between human drivers, a Stackelberg game between the AV manufacturer and HVs, and a Stackelberg game between the lawmaker and other users. We also establish the existence and uniqueness of the equilibrium of the game. The game is then simulated with numerical examples to investigate the emergence of human drivers' moral hazard, the AV manufacturer's role in traffic safety, and the lawmaker's role in liability design. Our findings demonstrate that human drivers could develop moral hazard if they perceive their road environment as having become safer, and that optimal liability rule design is crucial to improving social welfare with advanced transportation technologies. More generally, the game-theoretic model developed in this paper provides an analytical tool to assist policy-makers in AV policymaking and hopefully mitigate uncertainty in the existing regulatory landscape around AV technologies.
- North America > United States > California (0.04)
- North America > United States > Arizona > Yavapai County (0.04)
- North America > United States > Minnesota (0.04)
- (2 more...)
- Transportation > Ground > Road (1.00)
- Information Technology > Robotics & Automation (1.00)
- Automobiles & Trucks (1.00)
- Government > Regional Government > North America Government > United States Government (0.93)
- Information Technology > Game Theory (1.00)
- Information Technology > Artificial Intelligence > Robots > Autonomous Vehicles (1.00)
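The Stackelberg structure and moral-hazard effect described in the abstract above can be illustrated with a toy model. This is a minimal sketch with hypothetical payoff functions, not the paper's actual model: an AV manufacturer (leader) chooses a safety investment s, then a representative human driver (follower) best-responds with a care level c, and the liability rule is reduced to a single parameter (the driver's share of the expected crash loss).

```python
# Toy Stackelberg (leader-follower) liability game. All functional forms are
# illustrative assumptions: expected crash loss falls with both the AV
# manufacturer's safety investment s and the driver's care level c, and each
# party pays a quadratic effort cost plus its liability share of the loss.

GRID = [i * 0.01 for i in range(301)]  # decision grid over [0.0, 3.0]

def crash_cost(s, c):
    # Expected accident loss, decreasing in safety investment and driver care.
    return 10.0 / (1.0 + s + c)

def driver_cost(s, c, driver_share):
    # Driver bears a share of the loss plus the effort cost of taking care.
    return driver_share * crash_cost(s, c) + 0.5 * c ** 2

def manufacturer_cost(s, c, driver_share):
    # Manufacturer bears the residual liability plus its investment cost.
    return (1.0 - driver_share) * crash_cost(s, c) + 0.5 * s ** 2

def best_care(s, driver_share):
    # Follower's best response to the leader's observed choice.
    return min(GRID, key=lambda c: driver_cost(s, c, driver_share))

def stackelberg(driver_share):
    # Leader optimizes anticipating the follower's response (backward induction).
    s_star = min(
        GRID,
        key=lambda s: manufacturer_cost(s, best_care(s, driver_share), driver_share),
    )
    return s_star, best_care(s_star, driver_share)

if __name__ == "__main__":
    s_hi, c_hi = stackelberg(driver_share=0.8)  # driver bears most liability
    s_lo, c_lo = stackelberg(driver_share=0.2)  # manufacturer bears most
    print(f"driver share 0.8 -> safety {s_hi:.2f}, care {c_hi:.2f}")
    print(f"driver share 0.2 -> safety {s_lo:.2f}, care {c_lo:.2f}")
```

In this toy setting, shifting liability from the driver to the manufacturer raises equilibrium safety investment but lowers equilibrium driver care — the moral-hazard channel the abstract refers to: drivers who perceive the road environment as safer (and bear less of the loss) exert less care.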
The legal issues of robotics
Robots are the technology of the future. But the current legal system is incapable of handling them. This generic statement is often the premise for considerations about the possibility of awarding rights (and liabilities) to these machines at some not clearly identified point in time. Discussing the adequacy of existing regulation in accommodating new technologies is certainly necessary, but the ontological approach is incorrect. The recent Resolution of the European Parliament (henceforth Resolution) has great political relevance and strategic importance for the development of a European robotics industry.
- Law (1.00)
- Information Technology > Security & Privacy (0.97)
Who's responsible if a robot runs amok?
Today we are seeing novel expressions of artificial intelligence (AI) that were, just a while ago, the stuff of sci-fi: autonomous vehicles, self-learning machines, fiction-writing programs that may win literary prizes. Yet what if the AI goes awry? What if an autonomous vehicle malfunctions and damages your property? What if an AI robot hacks into a smart city's network and steals every citizen's personal data? Will our current legal liability rules give us satisfactory outcomes when applied to such scenarios?
- Law (1.00)
- Information Technology > Security & Privacy (0.35)